150 research outputs found

    Real-Time Analysis of Correlations Between On-Body Sensor Nodes

    The topology of a body sensor network has, until recently, often been overlooked, either because the layout of the network is deemed sufficiently static ("we always know well enough where the sensors are") or because the location of a sensor is not inherently required ("as long as the node stays where it is, we do not need its location, just its data"). We argue in this paper that, especially as sensor nodes become more numerous and densely interconnected, an analysis of the correlations between their data streams can be valuable for a variety of purposes. Two systems illustrate how a mapping of the network's sensor data to a topology of the sensor nodes' correlations can be applied to reveal more about the physical structure of body sensor networks.
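    The correlation analysis referred to above can be sketched as follows. This is a minimal illustration, not the paper's method: the windowing details and statistics used in the actual systems are not reproduced here, and all names and the threshold are assumptions.

```python
import numpy as np

def correlation_topology(streams, threshold=0.7):
    """Map sensor data streams to a topology of correlated nodes.

    streams: 2-D array with one row per sensor node and one column per
    sample (a hypothetical layout). Returns the pairwise Pearson
    correlation matrix and the node pairs whose absolute correlation
    exceeds the threshold -- the edges of the inferred topology.
    """
    corr = np.corrcoef(streams)
    n = corr.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(corr[i, j]) >= threshold]
    return corr, edges

# Example: three simulated streams; nodes 0 and 1 move together,
# so the inferred topology links them but not node 2.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
streams = np.vstack([
    np.sin(t),
    np.sin(t) + 0.05 * rng.standard_normal(200),
    np.cos(3.0 * t),
])
corr, edges = correlation_topology(streams)
```

    Running this on real on-body data would replace the simulated streams with per-node sensor readings over a sliding window, recomputing the edge set as the correlations change.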

    Issues in Recording Benchmark Sensor Data

    Sensors are rapidly following computing devices in popularity and widespread use, and as a result, protocols to interface, record, and process sensor data have cropped up everywhere. This position paper lists some of the 'lessons learned' in the creation and application of sets of embedded sensor data, specifically used as tools in building context-aware services in which sensor values are classified into context descriptions.
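    The classification step mentioned above, mapping raw sensor values to context descriptions, can be illustrated with a deliberately tiny toy. The sensor names, thresholds, and labels are invented for illustration and are not taken from the paper.

```python
# Toy mapping from raw embedded sensor readings to a coarse context
# description. Thresholds and labels are illustrative assumptions.
def classify_context(light_lux, noise_db, motion):
    """Return a context label for one snapshot of sensor values."""
    if motion:
        return "walking"
    if light_lux < 10 and noise_db < 40:
        return "sleeping"
    if noise_db >= 60:
        return "meeting"
    return "working"

print(classify_context(light_lux=5, noise_db=35, motion=False))  # -> sleeping
```

    A recorded benchmark dataset would pair each such snapshot with a ground-truth label, so classifiers like this can be evaluated and compared.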

    Multi-Level Sensory Interpretation and Adaptation in a Mobile Cube

    Signals from sensors are often analyzed in a sequence of steps, starting with the raw sensor data and eventually ending up with a classification or abstraction of these data. This paper gives a practical example of how the same information can be trained and used to produce multiple interpretations of the same data on different, application-oriented levels. Crucially, the focus is on expanding the embedded analysis software, rather than adding more powerful, but possibly resource-hungry, sensors. Our illustration of this approach involves a tangible input device in the shape of a cube that relies exclusively on low-cost accelerometers. The cube supports calibration with user supervision, can tell which of its sides is on top, give an estimate of its orientation relative to the user, and recognize basic gestures.
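    The "which side is on top" capability can be sketched with a single 3-axis accelerometer reading: at rest, the measured acceleration is dominated by gravity, so the top face is the one whose outward normal best aligns with the measured vector. The axis conventions and face labels below are assumptions, not the cube's actual firmware.

```python
import numpy as np

# Outward unit normals of the six cube faces in the (assumed)
# accelerometer frame.
FACES = {
    "+x": np.array([1.0, 0.0, 0.0]), "-x": np.array([-1.0, 0.0, 0.0]),
    "+y": np.array([0.0, 1.0, 0.0]), "-y": np.array([0.0, -1.0, 0.0]),
    "+z": np.array([0.0, 0.0, 1.0]), "-z": np.array([0.0, 0.0, -1.0]),
}

def top_face(accel):
    """Return the label of the face whose normal best matches the
    measured (gravity-dominated) acceleration vector."""
    a = np.asarray(accel, dtype=float)
    a = a / np.linalg.norm(a)
    return max(FACES, key=lambda f: float(np.dot(FACES[f], a)))

# Cube resting flat: gravity reads close to +1 g on the z axis.
print(top_face([0.02, -0.01, 0.98]))  # -> +z
```

    The same nearest-normal idea extends to a coarse orientation estimate; distinguishing finer rotations about the gravity axis would need additional sensing or the calibration step the abstract mentions.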

    A Surface-based In-House Network Medium for Power, Communication and Interaction

    Recent advances in communication and signal-processing methodologies have paved the way for high-speed home-network Power Line Communication (PLC) systems. Powerline communication and powerline control are becoming attractive as cost-effective and rapid mechanisms for delivering communication and control services, and a key question is the best mix of hardware and software to support infrastructure development for particular PLC applications. Integrating appliances in the home through a wired network often proves impractical: routing cables is usually difficult, changing the network structure afterwards even more so, and portable devices can only be connected at fixed connection points. Wireless networks are not the answer either: batteries have to be regularly recharged or replaced, and what they add to a device's size and weight may be disproportionate for smaller appliances. In Pin&Play, we explore a design space between typical wired and wireless networks, investigating the use of surfaces to network objects that are attached to them. This article gives an overview of the network model and describes functioning prototypes that were built as a proof of concept. The first phase of the development has already been demonstrated in appropriate conferences and publications [1]. Our intention is to introduce this work to the powerline community as the research enters phase II of the Pin&Play architecture, in which we investigate, develop prototype systems, and conduct studies in two concrete application areas. The first area is user-centric and concerned with support for collaborative work on large surfaces. The second is focused on exhibition spaces and trade fairs, and concerned with combining physical media, such as movable walls, with digital infrastructure for fast deployment of engaging installations.
    In this paper we describe the functionality of the Pin&Play architecture and introduce the second phase together with future plans. Figure 1 shows the technical approach: a surface with a simple layered structure and pushpin connectors, either dual-pin or coaxial.

    Creativity in Ubiquitous Computing Research

    This paper is concerned with the process of creating and designing research prototypes for augmented objects and applications in ubiquitous computing. We present a range of descriptions and reflections from personal experience of building prototypes for ubiquitous computing research while introducing and guiding students in this process. This is linked to a rationale of the process, the way it builds experience and knowledge, and the need to transform teaching and learning in these domains.

    WEAR: A Multimodal Dataset for Wearable and Egocentric Video Activity Recognition

    Though research has shown the complementarity of camera- and inertial-based data, datasets which offer both modalities remain scarce. In this paper we introduce WEAR, a multimodal benchmark dataset for both vision- and wearable-based Human Activity Recognition (HAR). The dataset comprises data from 18 participants performing a total of 18 different workout activities, with untrimmed inertial (acceleration) and camera (egocentric video) data recorded at 10 different outdoor locations. WEAR features a diverse set of activities which are low in inter-class similarity and, unlike previous egocentric datasets, are neither defined by human-object interactions nor drawn from inherently distinct activity categories. The provided benchmark results reveal that single-modality architectures have different strengths and weaknesses in their prediction performance. Further, in light of the recent success of transformer-based video action detection models, we demonstrate their versatility by applying them in a plain fashion using vision, inertial, and combined (vision + inertial) features as input. Results show that vision transformers are not only able to produce competitive results using only inertial data, but can also function as an architecture to fuse both modalities by means of simple concatenation, with the multimodal approach producing the highest average mAP, precision, and close-to-best F1-scores. Until now, vision-based transformers had been explored in neither inertial nor multimodal human activity recognition, making our approach the first to do so. The dataset and code to reproduce our experiments are publicly available via mariusbock.github.io/wear
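    The fusion "by means of simple concatenation" described above amounts to joining the per-frame vision and inertial feature vectors along the feature dimension before they enter the action-detection model. The shapes below are illustrative assumptions, not WEAR's actual feature dimensionalities.

```python
import numpy as np

def fuse_features(vision_feats, inertial_feats):
    """Concatenate per-frame features from both modalities.

    vision_feats: (T, Dv) array, inertial_feats: (T, Di) array,
    aligned on the same T time steps; returns a (T, Dv + Di) array.
    """
    assert vision_feats.shape[0] == inertial_feats.shape[0], \
        "both modalities must be aligned to the same number of frames"
    return np.concatenate([vision_feats, inertial_feats], axis=1)

# Illustrative dimensionalities: 2048-d vision and 128-d inertial
# features over 128 aligned time steps.
T = 128
fused = fuse_features(np.random.randn(T, 2048), np.random.randn(T, 128))
print(fused.shape)  # (128, 2176)
```

    The appeal of this design is that the downstream transformer needs no architectural change: the fused sequence is consumed exactly like a single-modality feature sequence of larger dimension.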